A distributed adaptive steplength stochastic approximation method for monotone stochastic Nash Games
We consider a distributed stochastic approximation (SA) scheme for computing
an equilibrium of a stochastic Nash game. Standard SA schemes employ
diminishing steplength sequences that are square summable but not summable.
Such requirements provide little or no guidance on how to leverage the
Lipschitzian and monotonicity properties of the problem, and naive choices
generally do not perform uniformly well across a breadth of problems. While a
centralized adaptive stepsize SA scheme is proposed in [1] for the optimization
framework, such a scheme provides no freedom for the agents in choosing their
own stepsizes. Thus, a direct application of centralized stepsize schemes is
impractical in solving Nash games. Furthermore, extensions to game-theoretic
regimes where players may independently choose steplength sequences are limited
to recent work by Koshal et al. [2]. Motivated by these shortcomings, we
present a distributed algorithm in which each player updates his steplength
based on the previous steplength and some problem parameters. The steplength
rules are derived from minimizing an upper bound of the errors associated with
players' decisions. It is shown that these rules generate sequences that
converge almost surely to an equilibrium of the stochastic Nash game.
Importantly, variants of this rule are suggested where players independently
select steplength sequences while abiding by an overall coordination
requirement. Preliminary numerical results are promising.
Comment: 8 pages, Proceedings of the American Control Conference, Washington, 201
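The paper's actual steplength recursion is not reproduced in the abstract; a minimal sketch of the general idea, assuming an illustrative two-player quadratic game and the hypothetical recursion γ_{k+1} = γ_k(1 − c γ_k) (which yields a square-summable but not summable sequence, with each player holding its own steplength), might look like:

```python
import numpy as np

# Illustrative two-player quadratic Nash game (not from the paper):
# player i minimizes 0.5*x_i^2 + 0.1*x_i*x_{-i} - b_i*x_i, so the
# concatenated gradient map is F(x) = A x - b, which is strongly monotone.
A = np.array([[1.0, 0.1],
              [0.1, 1.0]])
b = np.array([1.0, 0.5])
x_star = np.linalg.solve(A, b)  # the unique Nash equilibrium

rng = np.random.default_rng(0)
x = np.zeros(2)
gamma = np.array([0.9, 0.8])  # each player starts from its own steplength
c = np.array([0.5, 0.5])      # per-player parameters (coordinated: equal here)

for k in range(20000):
    # each player samples a noisy gradient of its own objective
    grad = A @ x - b + 0.1 * rng.standard_normal(2)
    x = x - gamma * grad                # per-player SA update
    gamma = gamma * (1.0 - c * gamma)   # illustrative adaptive recursion

err = np.linalg.norm(x - x_star)  # small for this strongly monotone game
```

With this recursion γ_k decays roughly like 1/(c·k), so the iterates converge to the equilibrium despite persistent gradient noise; the paper derives its rule instead by minimizing an upper bound on the players' decision errors.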
Improved guarantees for optimal Nash equilibrium seeking and bilevel variational inequalities
We consider a class of hierarchical variational inequality (VI) problems that
subsumes VI-constrained optimization and several other important problem
classes including the optimal solution selection problem, the optimal Nash
equilibrium (NE) seeking problem, and the generalized NE seeking problem. Our
main contributions are threefold. (i) We consider bilevel VIs with merely
monotone and Lipschitz continuous mappings and devise a single-timescale
iteratively regularized extragradient method (IR-EG). We improve the existing
iteration complexity results for addressing both bilevel VI and VI-constrained
convex optimization problems. (ii) Under the strong monotonicity of the outer
level mapping, we develop a variant of IR-EG, called R-EG, and derive
significantly faster guarantees than those in (i). These results appear to be
new for both bilevel VIs and VI-constrained optimization. (iii) To our
knowledge, complexity guarantees for computing the optimal NE in nonconvex
settings do not exist. Motivated by this lacuna, we consider VI-constrained
nonconvex optimization problems and devise an inexactly-projected gradient
method, called IPR-EG, where the projection onto the unknown set of equilibria
is performed using R-EG with a prescribed adaptive termination criterion and
regularization parameters. We obtain new complexity guarantees in terms of a
residual map and an infeasibility metric for computing a stationary point. We
validate the theoretical findings using preliminary numerical experiments for
computing the best and the worst Nash equilibria.
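The abstract does not give IR-EG's stepsize or regularization schedules; a minimal sketch of the underlying idea, an extragradient method applied to the inner mapping plus a diminishing multiple of the outer mapping, on a toy bilevel VI with assumed parameter choices, could be:

```python
import numpy as np

# Toy bilevel VI (illustrative, not from the paper):
# inner mapping F(x) = M x is monotone with solution set {x : x1 = x2};
# outer mapping G(x) = x - p is strongly monotone and selects the inner
# solution closest to p, namely (1, 1).
M = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
p = np.array([2.0, 0.0])
F = lambda x: M @ x
G = lambda x: x - p

x = np.zeros(2)
gamma = 0.25  # assumed constant stepsize, below 1/Lipschitz of the map
for k in range(20000):
    eps = 1.0 / np.sqrt(k + 1.0)        # diminishing regularization weight
    T = lambda z: F(z) + eps * G(z)     # regularized mapping
    y = x - gamma * T(x)                # extragradient: extrapolation step
    x = x - gamma * T(y)                # extragradient: update step

# x approaches (1, 1): the solution of the inner VI preferred by G
```

As eps decays, the solution of the regularized VI drifts toward the outer-optimal point of the inner solution set, which is the mechanism a single-timescale iteratively regularized scheme exploits; the paper's contribution lies in the specific schedules and the resulting complexity guarantees.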